Learning generalizable policies that can adapt to unseen environments remains challenging in visual Reinforcement Learning (RL). Existing approaches try to acquire a robust representation by diversifying the appearances of in-domain observations for better generalization. Limited by the specific observations of the environment, these methods ignore the possibility of exploiting diverse real-world image datasets. In this paper, we investigate how a visual RL agent would benefit from off-the-shelf visual representations. Surprisingly, we find that the early layers of an ImageNet pre-trained ResNet model could provide rather generalizable representations for visual RL. Hence, we propose Pre-trained Image Encoder for Generalizable visual reinforcement learning (PIE-G), a simple yet effective framework that can generalize to unseen visual scenarios in a zero-shot manner. Extensive experiments are conducted on the DMControl Generalization Benchmark, DMControl Manipulation Tasks, Drawer World, and CARLA to verify the effectiveness of PIE-G. Empirical evidence suggests that PIE-G improves sample efficiency and significantly outperforms previous state-of-the-art methods in terms of generalization performance. In particular, PIE-G boasts a 55% generalization performance gain on average in the challenging video background setting. Project Page: https://sites.google.com/view/pie-g/home.
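To make the core idea tangible, below is a minimal sketch (not the authors' exact implementation) of freezing the early layers of an ImageNet pre-trained ResNet and reusing them as an observation encoder for a visual RL agent; the ResNet-18 backbone, the cut-off after the second residual stage, and the input size are illustrative assumptions.

```python
# Minimal sketch: frozen early ResNet layers as an observation encoder for visual RL.
# Assumes torchvision >= 0.13; the layer cut-off and input size are illustrative choices.
import torch
import torch.nn as nn
from torchvision import models

class EarlyResNetEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # Keep only the stem and the first two residual stages (the "early layers").
        self.features = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2,
        )
        for p in self.features.parameters():
            p.requires_grad = False  # representation stays fixed; only the policy head is trained

    def forward(self, obs):  # obs: (B, 3, H, W); ImageNet normalization omitted for brevity
        with torch.no_grad():
            return self.features(obs)

encoder = EarlyResNetEncoder().eval()
feat = encoder(torch.rand(8, 3, 84, 84))  # e.g. DMControl-sized frames
print(feat.shape)  # (8, 128, 11, 11); flattened or pooled features would feed the RL policy
```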
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Generative models for learning combinatorial structures have transformative impacts in many applications. However, existing approaches fail to offer efficient and accurate learning results because of the highly intractable nature of gradient estimation for the learning objective subject to combinatorial constraints. Existing gradient estimation methods easily run into exponential time/memory costs, or incur large estimation errors due to improper approximation. We develop the NEural Lovász Sampler (Nelson), a neural network based on the Lovász Local Lemma (LLL). We show that, under certain conditions, it is guaranteed to generate samples satisfying the combinatorial constraints from the distribution of the constrained Markov Random Field (MRF) model. We further present a fully differentiable contrastive-divergence-based learning framework on constrained MRFs (Nelson-CD). Moreover, being fully differentiable, Nelson-CD allows us to take advantage of the parallel computing power of GPUs, resulting in great efficiency. Experimental results on three real-world combinatorial problems reveal that Nelson learns to generate 100% valid structures. In comparison, baselines either time out on large-scale datasets or fail to generate valid structures, whereas Nelson scales much better with problem size. In addition, Nelson outperforms baselines on various learning metrics, such as log-likelihood and MAP scores.
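To illustrate the general learning principle (not Nelson's actual sampler), the sketch below performs one contrastive-divergence-style update for a log-linear MRF p(x) ∝ exp(θ·φ(x)); the placeholder sampler, feature dimensionality, and toy data are purely illustrative assumptions.

```python
# Schematic CD update: positive phase (data features) minus negative phase (model samples).
# `sample_valid` is a stand-in for a constraint-satisfying sampler such as Nelson (assumption).
import torch

def cd_step(theta, data_feats, sample_valid, lr=1e-2):
    # data_feats: (N, D) features phi(x) of observed valid structures
    model_feats = sample_valid(theta)                 # (M, D) features of sampled structures
    grad = data_feats.mean(0) - model_feats.mean(0)   # approximate log-likelihood gradient
    return theta + lr * grad                          # gradient ascent on the log-likelihood

D = 16
theta = torch.zeros(D)
data_feats = torch.randint(0, 2, (128, D)).float()                          # toy "valid" structures
toy_sampler = lambda th: torch.bernoulli(torch.sigmoid(th).repeat(64, 1))   # placeholder sampler
theta = cd_step(theta, data_feats, toy_sampler)
```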
Complex systems are ubiquitous in the real world and tend to have complicated and poorly understood dynamics. For their control problems, the challenge is to guarantee accuracy, robustness, and generalization in such bloated and troubled environments. Fortunately, a complex system can be divided into multiple modular structures that human cognition appears to exploit. Inspired by this, a novel control method, Causal Coupled Mechanisms (CCMs), is proposed, which explores cooperation in division and competition in combination. Our method employs the theory of hierarchical reinforcement learning (HRL), in which 1) a high-level policy with competition awareness divides the whole complex system into multiple functional mechanisms, and 2) low-level policies complete the control task of each mechanism. In particular, a cascade control module for cooperation supports the serial operation of CCMs, and a forward coupled reasoning module is used to recover the coupling information lost during the division process. On both synthetic systems and a real-world biological regulation system, the CCM method achieves robust and state-of-the-art control results even under unpredictable random noise. Moreover, the generalization results show that reusing prepared specialized CCMs helps the method perform well in environments with different confounders and dynamics.
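As a rough, purely illustrative sketch of the two-level structure described above (not the paper's exact architecture), a high-level policy can softly assign the state to mechanisms while per-mechanism low-level policies produce the control; the gating scheme, network sizes, and soft combination below are assumptions.

```python
# Schematic two-level controller in the spirit of HRL: a high-level policy assigns the state
# to mechanisms, and each mechanism's low-level policy contributes its part of the action.
import torch
import torch.nn as nn

class TwoLevelController(nn.Module):
    def __init__(self, state_dim, action_dim, num_mechanisms=3):
        super().__init__()
        # High-level policy: soft assignment of the state to mechanisms (competition).
        self.assign = nn.Sequential(nn.Linear(state_dim, num_mechanisms), nn.Softmax(dim=-1))
        # Low-level policies: one small controller per mechanism.
        self.mechanisms = nn.ModuleList(
            [nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
             for _ in range(num_mechanisms)]
        )

    def forward(self, state):
        weights = self.assign(state)                                        # (B, K)
        actions = torch.stack([m(state) for m in self.mechanisms], dim=1)   # (B, K, A)
        return (weights.unsqueeze(-1) * actions).sum(dim=1)                 # weighted combination

controller = TwoLevelController(state_dim=8, action_dim=2)
action = controller(torch.randn(4, 8))
print(action.shape)  # (4, 2)
```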
Data-driven predictive methods that can efficiently and accurately transform protein sequences into biologically active structures are highly valuable for scientific research and therapeutic development. Determining accurate folding landscapes from co-evolutionary information is fundamental to the success of modern protein structure prediction methods. As the state of the art, AlphaFold2 has dramatically raised accuracy without requiring explicit co-evolutionary analysis. Nevertheless, its performance still shows a strong dependence on the available sequence homologs. We investigate the cause of this dependence and present EvoGen, a meta generative model, to remedy the underperformance of AlphaFold2 on targets with poor MSAs. EvoGen allows us to manipulate the folding landscape either by denoising the searched MSA or by generating virtual MSAs, and helps AlphaFold2 fold accurately in the low-data regime, even achieving encouraging performance with single-sequence predictions. Being able to make accurate predictions with few MSAs not only generalizes AlphaFold2 better to orphan sequences, but also democratizes its use in high-throughput applications. In addition, EvoGen combined with AlphaFold2 yields a probabilistic structure generation method that can explore alternative conformations of protein sequences, and the task-aware differentiable algorithm for sequence generation will benefit other related tasks, including protein design.
Recently, pioneering research works have proposed a large number of acoustic features (log power spectrogram, linear frequency cepstral coefficients, constant-Q cepstral coefficients, etc.) for audio deepfake detection, obtaining good performance and showing that different subbands contribute differently to audio deepfake detection. However, these studies lack an interpretation of the specific information in each subband, and such features also discard information such as phase. Inspired by the mechanism of speech synthesis, fundamental frequency (F0) information is used to improve the quality of synthesized speech, yet the F0 of synthesized speech remains too averaged and differs significantly from that of real speech. F0 can therefore be expected to serve as important information for discriminating between real and fake speech, but this information cannot be used directly because the distribution of F0 is irregular. Instead, the frequency band containing most of the F0 is selected as an input feature. Meanwhile, to make full use of phase and full-band information, we also propose to use the real and imaginary spectrograms as complementary input features and to model the disjoint subbands separately. Finally, the results from the F0 and the real and imaginary spectrogram features are fused. Experimental results on the ASVspoof 2019 LA dataset show that our proposed system is very effective for the audio deepfake detection task, achieving an equal error rate (EER) of 0.43%, which surpasses almost all systems.
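As a small sketch of the complementary real/imaginary spectrogram inputs mentioned above, the snippet below computes them with a standard STFT; the FFT size, hop length, and window are illustrative choices, not necessarily the paper's front-end configuration.

```python
# Minimal sketch: real and imaginary spectrograms as complementary input features.
import torch

def real_imag_spectrograms(wave, n_fft=512, hop=160):
    # Complex STFT, then split into real and imaginary parts (phase is preserved implicitly).
    spec = torch.stft(wave, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True)
    return spec.real, spec.imag  # each: (freq_bins, frames)

wave = torch.randn(16000)  # 1 second of audio at 16 kHz (toy input)
real_part, imag_part = real_imag_spectrograms(wave)
print(real_part.shape, imag_part.shape)
```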
Regression learning is classic and fundamental for medical image analysis. It provides the continuous mapping for many critical applications, such as attribute estimation, object detection, segmentation, and non-rigid registration. However, previous studies mainly took case-wise criteria, like the mean squared error, as the optimization objectives. They ignored the very important population-wise correlation criterion, which is exactly the final evaluation metric in many tasks. In this work, we propose to revisit the classic regression tasks with novel investigations on directly optimizing fine-grained correlation losses. We mainly explore two complementary correlation indexes as learnable losses: Pearson linear correlation (PLC) and Spearman rank correlation (SRC). The contributions of this paper are two-fold. First, for the PLC at the global level, we propose a strategy to make it robust against outliers and to regularize the key distribution factors. These efforts significantly stabilize the learning and magnify the efficacy of PLC. Second, for the SRC at the local level, we propose a coarse-to-fine scheme to ease the learning of the exact ranking order among samples. Specifically, we convert the learning of the ranking of samples into the learning of similarity relationships among samples. We extensively validate our method on two typical ultrasound image regression tasks, including image quality assessment and biometric measurement. Experiments prove that, with the fine-grained guidance of directly optimizing the correlation, the regression performance is significantly improved. Our proposed correlation losses are general and can be extended to more important applications.
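To make the idea of a learnable correlation index concrete, here is a minimal sketch of a differentiable Pearson-correlation loss; the helper name, the epsilon term, and the 1 - r formulation are illustrative, and the sketch omits the paper's outlier-robustness and distribution-regularization strategies.

```python
# Minimal sketch: differentiable Pearson-correlation loss for a batch of predictions.
import torch

def pearson_corr_loss(pred, target, eps=1e-8):
    # Center both vectors, then compute the normalized correlation coefficient r.
    pred = pred - pred.mean()
    target = target - target.mean()
    corr = (pred * target).sum() / (pred.norm() * target.norm() + eps)
    return 1.0 - corr  # maximizing correlation is equivalent to minimizing (1 - r)

pred = torch.randn(32, requires_grad=True)   # model outputs for a batch
target = torch.randn(32)                     # ground-truth regression targets
loss = pearson_corr_loss(pred, target)
loss.backward()                              # gradients flow through the correlation
```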
Proteins are essential components of human life, and their structures are important for analyzing function and mechanism. Recent work has shown the potential of AI-driven methods for protein structure prediction. However, the development of new models is limited by the available datasets and the benchmarking of training procedures. To the best of our knowledge, existing open-source datasets are far from sufficient for the needs of modern protein-sequence-related research. To address this, we introduce PSP, the first million-level protein structure prediction dataset with high coverage and diversity. The dataset consists of 570K true structure sequences (10TB) and 745K complementary distilled sequences (15TB). In addition, we provide a benchmark training procedure for SOTA protein structure prediction models on this dataset. We validate the utility of this dataset for training by participating in the CAMEO competition, where our model won first place. We hope that our PSP dataset, together with the training benchmark, can enable a broader community of AI/biology researchers to pursue AI-driven protein-related research.
Unsupervised person re-identification (ReID) is a challenging task without data annotation to guide discriminative learning. Existing methods attempt to solve this problem by clustering extracted embeddings to generate pseudo labels. However, most methods ignore the intra-class gap caused by camera style variance, and some methods, although they try to address the negative impact of camera style on the feature distribution, are relatively complex and indirect. To solve this problem, we propose a camera-aware style separation and contrastive learning method (CA-UReID), which directly separates camera styles in the feature space with a designed camera-aware attention module. It explicitly divides the learned features into camera-specific and camera-agnostic parts, reducing the influence of different cameras. Moreover, to further narrow the gap across cameras, we design a camera-aware contrastive center loss to learn more discriminative embeddings for each identity. Extensive experiments demonstrate the superiority of our method over state-of-the-art methods on the unsupervised person ReID task.
Computer-generated holography (CGH) has a wide range of applications, such as direct-view displays, virtual and augmented reality, and optical microscopy. CGH typically uses a spatial light modulator that displays a computer-generated phase mask, modulating the phase of coherent light to produce customized patterns. The algorithm that computes the phase mask is the core of CGH and is usually tailored to meet the requirements of different applications. CGH for optical microscopy generally requires 3D accessibility (i.e., generating overlapping patterns along the $z$-axis) and micron-scale spatial precision. Here, we propose a CGH algorithm that uses an unsupervised generative model designed for optical microscopy to synthesize 3D selective illumination. The algorithm, named sparse deep CGH, is able to generate sparsely distributed points in a large 3D volume with higher contrast than conventional CGH algorithms.